Consolidate UI-Vite API usage onto backend adapter #117
Conversation
Co-authored-by: tikazyq <[email protected]>
tikazyq left a comment
@copilot resolve conflicts
Pull request overview
This PR consolidates duplicate HTTP API logic from api.ts into the BackendAdapter pattern, enabling future Tauri support while maintaining backward compatibility. The change migrates all API consumers to use the backend adapter as the single source of truth for HTTP communication.
Key changes:
- Extended the `BackendAdapter` interface with project CRUD, validation, context files, directory listing, and spec search methods
- Migrated HTTP fetch logic from `api.ts` to `HttpBackendAdapter`, reusing existing error handling patterns
- Simplified `api.ts` to delegate all calls through `getBackend()` while preserving exported utilities and types
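The delegation pattern described above can be sketched as follows. This is a simplified illustration, not the PR's actual code: the real interface and implementations live in `packages/ui-vite/src/lib/backend-adapter.ts`, and `MockBackendAdapter`, `Project`, and the method signatures here are stand-ins so the example runs without a server.

```typescript
interface Project {
  id: string;
  name: string;
}

// Single source of truth for backend communication. An HTTP implementation
// wraps fetch(); a future Tauri implementation would invoke Rust commands.
// Both satisfy the same interface, so callers never care which is active.
interface BackendAdapter {
  listProjects(): Promise<Project[]>;
  getContextFiles(): Promise<string[]>;
}

// Stand-in implementation for illustration only.
class MockBackendAdapter implements BackendAdapter {
  async listProjects(): Promise<Project[]> {
    return [{ id: "p1", name: "demo" }];
  }
  async getContextFiles(): Promise<string[]> {
    return ["README.md"];
  }
}

const backend: BackendAdapter = new MockBackendAdapter();

function getBackend(): BackendAdapter {
  return backend;
}

// api.ts then becomes a thin delegation layer instead of duplicating fetch logic:
async function listProjects(): Promise<Project[]> {
  return getBackend().listProjects();
}
```

The key property is that `api.ts` keeps its public surface (exported functions and types) unchanged, so existing consumers are unaffected while the transport becomes swappable.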
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| specs/201-ui-vite-backend-adapter-migration/README.md | Updated spec status to in-progress and documented completed implementation tasks |
| packages/ui-vite/src/lib/backend-adapter.ts | Added project CRUD, validation, context file, and search methods to adapter interface with HTTP and Tauri implementations |
| packages/ui-vite/src/lib/api.ts | Removed duplicate HTTP logic, delegated to backend adapter, exported utilities and types |
| packages/ui-vite/src/lib/api.test.ts | Updated test expectations to match adapter-based responses and Rust payload structures |
packages/ui-vite/src/lib/backend-adapter.ts:

```ts
async getContextFiles(): Promise<ContextFileListItem[]> {
  const data = await this.fetchAPI<{ files?: ContextFileListItem[] }>('/api/context');
```
Copilot AI (Jan 7, 2026)
The response type allows files to be optional, but the endpoint likely always returns this field. Consider making files required in the type definition to accurately reflect the API contract.
Suggested change:

```diff
- const data = await this.fetchAPI<{ files?: ContextFileListItem[] }>('/api/context');
+ const data = await this.fetchAPI<{ files: ContextFileListItem[] }>('/api/context');
```
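If the `files` field is instead kept optional, callers can normalize the response rather than tighten the type. A minimal sketch, assuming a `ContextFileListItem` shape like the one in the snippet above (the `path` field here is hypothetical):

```typescript
interface ContextFileListItem {
  path: string;
}

// Defaulting to [] keeps downstream code safe whether or not the
// endpoint omits `files` from the response body.
function normalizeContextResponse(
  data: { files?: ContextFileListItem[] }
): ContextFileListItem[] {
  return data.files ?? [];
}
```

Either approach works; making the field required better documents the contract, while normalizing is more defensive against server-side changes.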
```ts
  const date = new Date(value);
  return Number.isNaN(date.getTime()) ? null : date;
}
```
Copilot AI (Jan 7, 2026)
The estimateTokenCount function is now exported but lacks documentation explaining its purpose, algorithm, and use cases. Add a JSDoc comment describing what it estimates and why the 1.15 multiplier is used.
Suggested change:

```ts
/**
 * Estimates the number of language-model tokens in a text string.
 *
 * This is a lightweight heuristic that:
 * 1. Trims the input string.
 * 2. Splits on one or more whitespace characters to approximate the word count.
 * 3. Multiplies the word count by 1.15 to account for punctuation, short words,
 *    and other tokenization overhead that typically makes token counts slightly
 *    higher than raw word counts.
 * 4. Rounds up to the nearest integer and ensures a minimum of 1.
 *
 * This estimate is intended for coarse budgeting (e.g., deciding how much content
 * can be sent to an LLM within a token limit), not for exact accounting, and may
 * differ from the true token count of a specific tokenizer.
 *
 * @param content - The text content to estimate token usage for.
 * @returns An upper-biased estimate of the number of tokens.
 */
```
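An implementation matching that documented heuristic might look like the sketch below. This is an illustration of the described algorithm, not the PR's actual `estimateTokenCount` in `api.ts`, whose exact edge-case handling may differ:

```typescript
function estimateTokenCount(content: string): number {
  const trimmed = content.trim();
  // An empty string has zero words; the Math.max below enforces the
  // documented minimum of 1 token.
  const words = trimmed === "" ? 0 : trimmed.split(/\s+/).length;
  // 1.15 multiplier accounts for punctuation and tokenization overhead,
  // biasing the estimate upward; round up to the nearest whole token.
  return Math.max(1, Math.ceil(words * 1.15));
}
```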